13 research outputs found

    MRGM: An Adaptive Mechanism for Congestion Control in Smart Vehicular Network

    Get PDF
    Traffic flow on roads has increased manifold over the past few decades due to the growing number of vehicles and the rising population. With fixed road infrastructure and more vehicles on the routes, traffic congestion arises, especially in urban areas of developing nations. Traffic jams are common in major cities and ultimately cause longer travel times, higher fuel consumption and more pollution. This manuscript proposes a multi-metric road guidance mechanism (MRGM) that considers multiple metrics to analyze traffic congestion conditions and, based on these conditions, suggests effective optimal routes to the vehicles. The proposed mechanism is simulated in SUMO using a Python script, and the results show that MRGM outperforms other mechanisms in terms of traffic efficiency, travel time, fuel consumption and pollution levels in the smart vehicular network.
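
    The abstract does not give MRGM's exact scoring formula; the following is a minimal sketch, assuming a weighted sum of normalized congestion metrics (the metric names, weights and values are hypothetical), of how candidate routes could be ranked before one is suggested to a vehicle. In a SUMO experiment such metrics would typically be read through the TraCI interface.

```python
# Hypothetical multi-metric route scoring (not the authors' exact MRGM formulation).
# Each candidate route gets a weighted sum of normalized congestion metrics;
# the route with the lowest score is suggested to the vehicle.

def score_route(metrics, weights):
    """metrics/weights: dicts keyed by metric name; lower score = less congested."""
    return sum(weights[m] * metrics[m] for m in weights)

def suggest_route(candidates, weights):
    """candidates: {route_id: {metric: value normalized to [0, 1]}}."""
    return min(candidates, key=lambda r: score_route(candidates[r], weights))

if __name__ == "__main__":
    # Assumed metrics: normalized travel time, lane occupancy, and a CO2 estimate.
    candidates = {
        "route_A": {"travel_time": 0.9, "occupancy": 0.8, "co2": 0.7},
        "route_B": {"travel_time": 0.5, "occupancy": 0.4, "co2": 0.6},
    }
    weights = {"travel_time": 0.5, "occupancy": 0.3, "co2": 0.2}
    print(suggest_route(candidates, weights))  # -> route_B
```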

    Service vs Protection: A Bayesian Learning Approach for Trust Provisioning in Edge of Things Environment

    Get PDF
    Edge of Things (EoT) technology enables end users to participate, through smart sensors and mobile devices (such as smartphones and wearables), with the smart devices deployed across a smart city. Trust management is the main challenge in EoT infrastructure for identifying trusted participants, as Quality of Service (QoS) is strongly affected by malicious users supplying fake or altered data. In this paper, a Robust Trust Management (RTM) scheme is designed based on Bayesian learning and collaborative filtering. The proposed RTM model is updated at regular intervals, applying a decay factor to the previously calculated scores so that changes in behavior are reflected quickly. The dynamic characteristics of edge nodes are analyzed with a new probability score mechanism derived from recent service behavior. The performance of the proposed trust management scheme is evaluated in a simulated environment, with the percentage of collaborating devices tuned to 10%, 50% and 100%. The proposed RTM scheme achieves a maximum accuracy of 99.8%. The experimental results demonstrate that the RTM scheme outperforms existing techniques in filtering malicious behavior and in accuracy.
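
    The abstract does not spell out the RTM update rule; a minimal sketch of one common Bayesian trust formulation (a Beta-distribution score with a periodic decay factor, where the decay value and parameter names are assumptions) could look as follows.

```python
# Illustrative Beta-distribution trust score with a periodic decay, standing in for
# the RTM update rule; the decay value and parameter names are assumptions.

class BetaTrust:
    def __init__(self, alpha=1.0, beta=1.0, decay=0.9):
        self.alpha, self.beta, self.decay = alpha, beta, decay

    def record(self, positive, negative):
        # Accumulate evidence from the latest batch of service interactions.
        self.alpha += positive
        self.beta += negative

    def apply_decay(self):
        # Called after each interval so older evidence counts less and
        # recent behavior changes are reflected quickly.
        self.alpha *= self.decay
        self.beta *= self.decay

    @property
    def score(self):
        # Expected probability that the node behaves honestly.
        return self.alpha / (self.alpha + self.beta)

node = BetaTrust()
node.record(positive=8, negative=2)
node.apply_decay()
print(round(node.score, 3))
```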

    DaaS: Dew Computing as a Service for Intelligent Intrusion Detection in Edge-of-Things Ecosystem

    Get PDF
    Edge of Things (EoT) enables the seamless transfer of services, storage, and data processing from the cloud layer to edge devices in large-scale distributed Internet of Things (IoT) ecosystems (e.g., industrial systems). This transition raises privacy and security concerns in the EoT paradigm, which is distributed across different layers. Intrusion detection systems (IDSs) are implemented in EoT ecosystems to protect the underlying resources from attackers. However, current IDSs are not intelligent enough to control false alarms, which significantly lower reliability and add to the analysis burden on the IDSs. In this article, we present Dew Computing as a Service (DaaS) for intelligent intrusion detection in EoT ecosystems. In DaaS, a deep learning-based classifier is used to design an intelligent alarm filtration mechanism, in which the filtration accuracy is improved (or sustained) using deep belief networks (DBNs). In the past, cloud-based techniques have been applied to offload EoT tasks, which increases the burden on the middle layer and raises communication delay; here, we instead use dew computing features to design the smart false-alarm reduction system. When evaluated in a simulated environment, DaaS shows a lower response time for processing data in the EoT ecosystem. The revamped DBN model achieves a classification accuracy of up to 95%. Moreover, it shows a 60% improvement in latency and a 35% reduction in cloud server workload compared to an edge IDS.
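
    scikit-learn offers no full deep belief network, so the sketch below only approximates the alarm-filtration idea with a single RBM feature extractor feeding a logistic-regression head; the synthetic data and all hyperparameters are placeholders rather than the paper's configuration.

```python
# DBN stand-in: one RBM feature extractor feeding a logistic-regression head on
# synthetic alarm data (1 = true intrusion, 0 = false alarm to be filtered out).
import numpy as np
from sklearn.neural_network import BernoulliRBM
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline

rng = np.random.default_rng(0)
X = rng.random((500, 20))                  # alarm feature vectors (e.g., flow statistics)
y = (X[:, 0] + X[:, 1] > 1.0).astype(int)  # synthetic ground-truth labels

alarm_filter = Pipeline([
    ("rbm", BernoulliRBM(n_components=16, learning_rate=0.05, n_iter=20, random_state=0)),
    ("clf", LogisticRegression(max_iter=500)),
])
alarm_filter.fit(X, y)
print("training accuracy:", alarm_filter.score(X, y))
```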

    An optimized cost-based data allocation model for heterogeneous distributed computing systems

    Get PDF
    Continuous attempts have been made to improve the flexibility and effectiveness of distributed computing systems. Extensive effort in the fields of connectivity technologies, network programs, high-performance processing components, and storage helps to improve results. However, concerns such as slow response, long execution time, and long completion time have been identified as stumbling blocks that hinder performance and require additional attention. These defects increase the total system cost and make the data allocation procedure for a geographically dispersed setup difficult. The load-based architectural model has been strengthened to improve data allocation performance. To do this, an abstract job model is employed, and a data query file containing input data is processed on a directed acyclic graph. The jobs are executed on the processing engine with the lowest execution cost, and the system's total cost is computed by summing the costs of communication, computation, and network. The total cost of the system is then reduced using a swarm intelligence algorithm. In heterogeneous distributed computing systems, the suggested approach attempts to reduce the system's total cost and improve data distribution. According to simulation results, the technique efficiently lowers total system cost and optimizes partitioned data allocation.
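
    As an illustration of how a swarm intelligence method can minimize such a total cost, the sketch below runs a basic particle swarm optimization over fragment-to-site assignments; the random cost matrix, the single combined cost term and all PSO constants are assumptions, not the paper's model.

```python
# Basic particle swarm optimization over fragment-to-site assignments; the random
# cost matrix and all constants are illustrative, not the paper's cost model.
import random

N_FRAGMENTS, N_SITES = 6, 3
random.seed(1)
# cost[f][s]: combined (communication + computation + network) cost of placing fragment f at site s
cost = [[random.uniform(1, 10) for _ in range(N_SITES)] for _ in range(N_FRAGMENTS)]

def decode(particle):
    # Round each continuous coordinate to a valid site index.
    return [min(N_SITES - 1, max(0, round(x))) for x in particle]

def total_cost(assignment):
    return sum(cost[f][s] for f, s in enumerate(assignment))

def pso(particles=20, iters=100, w=0.7, c1=1.5, c2=1.5):
    pos = [[random.uniform(0, N_SITES - 1) for _ in range(N_FRAGMENTS)] for _ in range(particles)]
    vel = [[0.0] * N_FRAGMENTS for _ in range(particles)]
    pbest = [p[:] for p in pos]
    gbest = min(pbest, key=lambda p: total_cost(decode(p)))[:]
    for _ in range(iters):
        for i in range(particles):
            for d in range(N_FRAGMENTS):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            if total_cost(decode(pos[i])) < total_cost(decode(pbest[i])):
                pbest[i] = pos[i][:]
                if total_cost(decode(pbest[i])) < total_cost(decode(gbest)):
                    gbest = pbest[i][:]
    return decode(gbest), total_cost(decode(gbest))

print(pso())
```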

    Opportunistic Networks: Present Scenario- A Mirror Review

    Get PDF
    An Opportunistic Network (OPPNET) is a form of Delay Tolerant Network (DTN) and is regarded as an extension of the Mobile Ad Hoc Network. OPPNETs are designed to operate especially in environments characterized by issues such as high error rates, intermittent connectivity, high delay and the absence of a defined route between the source and destination nodes. OPPNETs work on the “store-and-forward” principle, in which intermediate nodes perform the task of routing from node to node. The intermediate nodes store messages in their memory until a suitable node comes within communication range to carry the message toward the destination. OPPNETs suffer from various issues such as high delay, node energy efficiency, security, high error rates and high latency. The aim of this research paper is to survey the routing protocols proposed to date for OPPNETs and classify them in terms of their performance. The paper also gives a quick review of the mobility models and simulation tools available for OPPNET simulation.
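
    The store-and-forward principle can be illustrated with a toy sketch: a node buffers messages while no contact exists and hands them over (or delivers them) when a neighbor is encountered. Real OPPNET protocols such as Epidemic, PRoPHET or Spray-and-Wait use far richer forwarding rules; the class and the first-contact rule below are purely illustrative.

```python
# Toy store-and-forward node: messages are buffered until a contact occurs, then either
# delivered (if the neighbor is the destination) or handed over as a custody transfer.
from collections import deque

class OppNode:
    def __init__(self, node_id):
        self.node_id = node_id
        self.buffer = deque()          # messages carried while no contact is available

    def store(self, message):
        self.buffer.append(message)    # "store" phase

    def on_contact(self, neighbor):
        # "forward" phase: a naive first-contact rule, purely for illustration
        while self.buffer:
            msg = self.buffer.popleft()
            if msg["dst"] == neighbor.node_id:
                print(f"{self.node_id} -> {neighbor.node_id}: delivered {msg['id']}")
            else:
                neighbor.store(msg)

a, b = OppNode("A"), OppNode("B")
a.store({"id": "m1", "dst": "B"})
a.store({"id": "m2", "dst": "C"})
a.on_contact(b)                        # m1 delivered to B, m2 handed to B for later forwarding
print([m["id"] for m in b.buffer])     # -> ['m2']
```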

    Adaptive Recovery Mechanism for SDN Controllers in Edge-Cloud supported FinTech Applications

    Get PDF
    Financial technology (FinTech) has revolutionized the delivery and usage of autonomous operations and processes to improve financial services. However, the massive amount of data (often called big data) generated seamlessly across different geographic locations can end up as a bottleneck for the underlying network infrastructure. To mitigate this challenge, software-defined networking (SDN) is leveraged in the proposed approach to provide scalability and resilience in a multi-controller environment. However, if one of these controllers fails or cannot work as required, its network load has to be migrated to another suitable controller or divided and balanced among the other available controllers. For this purpose, the proposed approach provides an adaptive recovery mechanism in a multi-controller SDN setup using a support vector machine-based classification approach. The proposed work defines a recovery pool based on three vital parameters: reliability, energy, and latency. A utility matrix is then computed from these parameters, on the basis of which the recovery controllers are selected. The results obtained show that the mechanism performs well in terms of the considered evaluation parameters.
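
    A hedged sketch of the idea, not the authors' implementation: an SVM trained on (reliability, energy, latency) features decides whether a controller qualifies for the recovery pool, and a simple weighted utility score ranks the qualifying candidates. The training samples, feature values and weights are illustrative assumptions.

```python
# SVM-based suitability check plus a weighted utility ranking for recovery controllers;
# the training samples, features and weights are illustrative assumptions.
import numpy as np
from sklearn.svm import SVC

# Training data: [reliability, residual energy, latency in ms] -> 1 = suitable for recovery
X = np.array([[0.95, 0.90, 10], [0.90, 0.85, 15], [0.60, 0.40, 80],
              [0.55, 0.30, 90], [0.92, 0.70, 20], [0.50, 0.60, 70]])
y = np.array([1, 1, 0, 0, 1, 0])
clf = SVC(kernel="linear").fit(X, y)

def utility(reliability, energy, latency, w=(0.4, 0.3, 0.3)):
    # Higher reliability/energy and lower latency give higher utility.
    return w[0] * reliability + w[1] * energy + w[2] * (1 - latency / 100)

candidates = {"C1": (0.93, 0.80, 12), "C2": (0.58, 0.35, 85)}
pool = {c: utility(*f) for c, f in candidates.items() if clf.predict([f])[0] == 1}
print(max(pool, key=pool.get))  # best recovery controller among those classified as suitable
```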

    VNE solution for network differentiated QoS and security requirements: from the perspective of deep reinforcement learning

    Get PDF
    The rapid development and deployment of network services have brought a series of challenges to researchers. On the one hand, the needs of Internet end users/applications are increasingly diverse, and they pursue service quality from different perspectives. On the other hand, with the explosive growth of information in the era of big data, a large amount of private information is stored in the network, so end users/applications naturally start to pay attention to network security. To address the requirements of differentiated quality of service (QoS) and security, this paper proposes a virtual network embedding (VNE) algorithm based on deep reinforcement learning (DRL), targeting the CPU, bandwidth, delay and security attributes of the substrate network. A DRL agent is trained in the network environment constructed from the above attributes; its purpose is to derive the mapping probability of each substrate node and map the virtual nodes according to these probabilities. Finally, a breadth-first search (BFS) strategy is used to map the virtual links. In the experiments, the DRL-based algorithm is compared with other representative algorithms in three aspects: long-term average revenue, long-term revenue-consumption ratio and acceptance rate. The results show that the proposed algorithm achieves good experimental results, demonstrating that it can be effectively applied to meet the differentiated QoS and security requirements of end users/applications.
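
    The sketch below illustrates only the final two steps on a made-up four-node substrate: virtual nodes are placed by sampling from a probability vector that stands in for the DRL agent's output, and each virtual link is then mapped onto a substrate path found by breadth-first search. CPU, bandwidth and security constraints are omitted.

```python
# Node placement by sampling from a stand-in probability vector (mimicking the DRL
# agent's output), then virtual-link mapping over a shortest substrate path via BFS.
# The four-node substrate and the probabilities are made up; resource checks are omitted.
from collections import deque
import random

substrate = {0: [1, 2], 1: [0, 3], 2: [0, 3], 3: [1, 2]}   # substrate adjacency list
node_probs = [0.1, 0.4, 0.2, 0.3]                           # stand-in for DRL mapping probabilities

def bfs_path(graph, src, dst):
    # Shortest hop-count path from src to dst (None if unreachable).
    parent, queue, seen = {src: None}, deque([src]), {src}
    while queue:
        u = queue.popleft()
        if u == dst:
            path = [u]
            while parent[u] is not None:
                u = parent[u]
                path.append(u)
            return path[::-1]
        for v in graph[u]:
            if v not in seen:
                seen.add(v)
                parent[v] = u
                queue.append(v)
    return None

random.seed(0)
placement, used = {}, set()
for v_node in ["a", "b"]:                                   # one virtual request with nodes a, b
    weights = [0 if i in used else p for i, p in enumerate(node_probs)]
    choice = random.choices(range(len(node_probs)), weights=weights)[0]
    placement[v_node] = choice
    used.add(choice)

# Map the single virtual link (a, b) onto a substrate path.
print(placement, bfs_path(substrate, placement["a"], placement["b"]))
```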

    Existing Path Planning Techniques in Unmanned Aerial Vehicles (UAVs): A Systematic Review

    No full text

    ReLeDP: Reinforcement-Learning-Assisted Dynamic Pricing for Wireless Smart Grid

    Get PDF
    The smart grid must ensure that power providers can obtain substantial benefits by selling energy while, at the same time, considering the cost to consumers. To realize this win-win situation, the smart grid relies on dynamic pricing mechanisms. However, most of the existing dynamic pricing schemes are based on manually defined rules or conventional models, which cannot ensure the desired effectiveness. Thus, we apply reinforcement learning to model the supply-demand relationship between power providers and consumers in a smart grid. The dynamic pricing problem of the smart grid is modeled as a discrete Markov decision process, and the decision process is solved by Q-learning. The success of any intelligent dynamic pricing scheme, however, relies on timely data transmission, and the scale and speed of data generation can create several network bottlenecks that further reduce the performance of any dynamic pricing scheme. Hence, to overcome this challenge, we propose an artificial-intelligence-based adaptive network architecture that adopts software-defined networking. In this architecture, we use a self-organizing map-based traffic classification approach followed by a dynamic virtual network embedding mechanism. We demonstrate the effectiveness of the dynamic pricing strategy, supported by the adaptive network architecture, based on various performance indicators. The outcomes suggest that the proposed strategy is of great significance for realizing the sustainability of power energy in the future. Lastly, we discuss various implementation challenges and future directions before concluding the article.
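
    A minimal tabular Q-learning sketch of the pricing decision process described above: states are discretized demand levels, actions are discrete price levels, and the reward trades provider revenue off against a consumer-cost penalty. The toy demand-response model and every constant are illustrative assumptions, not the paper's parameters.

```python
# Tabular Q-learning on a toy pricing MDP: states are discretized demand levels, actions
# are price levels, reward = revenue minus a consumer-cost penalty. All constants and the
# demand-response model are illustrative assumptions.
import random

random.seed(0)
STATES = range(5)                 # discretized demand levels
ACTIONS = [1.0, 1.5, 2.0, 2.5]    # candidate prices
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.1

Q = {(s, a): 0.0 for s in STATES for a in ACTIONS}

def step(state, price):
    # Toy environment: higher prices depress demand; reward trades revenue against consumer cost.
    demand = max(0, 4 - state - int(price))
    reward = price * demand - 0.3 * price
    return min(demand, 4), reward

state = 2
for _ in range(5000):
    if random.random() < EPS:
        action = random.choice(ACTIONS)                       # explore
    else:
        action = max(ACTIONS, key=lambda a: Q[(state, a)])    # exploit
    next_state, reward = step(state, action)
    best_next = max(Q[(next_state, a)] for a in ACTIONS)
    Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])
    state = next_state

# Greedy price per demand level after learning.
print({s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in STATES})
```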